Generic event boundary detection (GEBD) is an important yet challenging task in video understanding, which aims at detecting the moments where humans naturally perceive event boundaries. In this paper, we present a local context modeling and global boundary decoding approach for the GEBD task. A local context modeling sub-network is proposed to perceive the diverse patterns of generic event boundaries and to generate powerful video representations and reliable boundary confidences. Based on them, a global boundary decoding sub-network is exploited to decode event boundaries from a global view. Our method achieves an 85.13% F1-score on the Kinetics-GEBD test set, a more than 22% F1-score improvement over the baseline method. The code is available at https://github.com/jackytown/gebd_challenge_cvpr2022.
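As a rough illustration of how the second stage could operate, the sketch below turns per-frame boundary confidences into a boundary list via score-ordered suppression from a global view. This is not the paper's actual decoder; the function name, threshold, and minimum gap are all hypothetical.

```python
import numpy as np

def decode_boundaries(confidence, threshold=0.5, min_gap=5):
    """Pick event boundaries from per-frame confidence scores.

    A frame is kept as a boundary if its confidence exceeds `threshold`
    and it is not within `min_gap` frames of a higher-scoring frame that
    has already been selected (greedy non-maximum suppression).
    """
    order = np.argsort(-confidence)            # frames sorted by score, high to low
    selected = []
    for t in order:
        if confidence[t] < threshold:
            break
        if all(abs(t - s) >= min_gap for s in selected):
            selected.append(int(t))
    return sorted(selected)

# Toy usage: 20 frames with two confident boundary regions.
conf = np.zeros(20)
conf[[4, 5, 6]] = [0.6, 0.9, 0.7]
conf[[14, 15]] = [0.8, 0.55]
print(decode_boundaries(conf))                 # -> [5, 14]
```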
Generic event boundary detection is an important yet challenging task in video understanding, which aims at detecting the moments where humans naturally perceive event boundaries. The main challenge of this task is perceiving the diverse temporal variations of event boundaries. To this end, this paper presents an effective and end-to-end learnable framework (DDM-Net). Targeting the diversity and complicated semantics of event boundaries, we make three notable improvements. First, we construct a feature bank to store multi-level spatial and temporal features, ready for difference calculation at multiple scales. Second, to alleviate the insufficient temporal modeling of previous methods, we present dense difference maps (DDM) to comprehensively characterize motion patterns. Finally, we exploit progressive attention on multi-level DDM to jointly aggregate appearance and motion cues. As a result, DDM-Net achieves significant improvements of 14% and 8% on the Kinetics-GEBD and TAPOS benchmarks, respectively, and outperforms the top-1 winner solution of the LOVEU Challenge @ CVPR 2021 without bells and whistles. The state-of-the-art results demonstrate the effectiveness of richer motion representations and more sophisticated aggregation in handling the diversity of generic event boundary detection. Our code will be released soon.
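One plausible reading of a "dense difference map" is a matrix of pairwise feature distances between frames within a window, computed separately at each level of the feature bank. The sketch below shows that interpretation; it is an assumption, not the paper's exact formulation.

```python
import torch

def dense_difference_map(feats: torch.Tensor) -> torch.Tensor:
    """Compute a dense difference map from per-frame features.

    feats: (T, D) features of T frames at one level of the feature bank.
    Returns a (T, T) map whose (i, j) entry is the L2 distance between
    frame i and frame j, so each row summarizes how frame i differs from
    every other frame in the window (a dense characterization of motion).
    """
    diff = feats.unsqueeze(1) - feats.unsqueeze(0)   # (T, T, D) pairwise differences
    return diff.norm(dim=-1)                         # (T, T) distance map

# Toy usage: a window of 8 frames with 256-d features from one scale.
window = torch.randn(8, 256)
ddm = dense_difference_map(window)
print(ddm.shape)   # torch.Size([8, 8])
```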
Referring image segmentation is a fundamental vision-language task that aims to segment the object referred to by a natural language expression in an image. A key challenge behind this task is leveraging the referring expression to highlight the relevant positions in the image. A common paradigm is to employ a powerful vision-language ("cross-modal") decoder to fuse features extracted independently from a vision encoder and a language encoder. Recent methods have made remarkable advances within this paradigm by exploiting Transformers as cross-modal decoders, in line with the Transformer's overwhelming success in many other vision-language tasks. Adopting a different approach in this work, we show that significantly better cross-modal alignment can be achieved through early fusion of linguistic and visual features in the intermediate layers of a vision Transformer encoder network. By conducting cross-modal feature fusion during the visual feature encoding stage, we can leverage the well-proven correlation modeling power of the Transformer encoder to mine useful multi-modal context. In this way, accurate segmentation results are readily harvested with a lightweight mask predictor. Without bells and whistles, our method surpasses the previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref.
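A minimal sketch of the early-fusion idea follows: visual tokens from an intermediate encoder layer attend to word features, and the attended context is folded back into the visual stream before further encoding. The module design, dimensions, and names are assumptions for illustration, not the paper's exact fusion block.

```python
import torch
import torch.nn as nn

class EarlyFusionBlock(nn.Module):
    """Fuse language features into visual tokens inside the encoder.

    Visual tokens attend to word features via cross-attention, and the
    attended context is added back to the visual stream, so fusion
    happens while visual features are still being encoded.
    """
    def __init__(self, vis_dim=768, lang_dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(vis_dim, heads, kdim=lang_dim,
                                          vdim=lang_dim, batch_first=True)
        self.norm = nn.LayerNorm(vis_dim)

    def forward(self, vis_tokens, lang_tokens):
        # vis_tokens: (B, N, vis_dim) patch tokens from an intermediate layer
        # lang_tokens: (B, L, lang_dim) word features from the language encoder
        ctx, _ = self.attn(vis_tokens, lang_tokens, lang_tokens)
        return self.norm(vis_tokens + ctx)

# Toy usage: 196 patch tokens fused with a 12-word referring expression.
fusion = EarlyFusionBlock()
vis = torch.randn(2, 196, 768)
lang = torch.randn(2, 12, 768)
print(fusion(vis, lang).shape)   # torch.Size([2, 196, 768])
```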
Automatic math problem solving has recently attracted increasing attention as a long-standing AI benchmark. In this paper, we focus on solving geometric problems, which requires a comprehensive understanding of textual descriptions, visual diagrams, and theorem knowledge. However, existing methods rely heavily on handcrafted rules and are only evaluated on small-scale datasets. We therefore propose a geometric question answering dataset, GeoQA, containing 4,998 geometric problems with corresponding annotated programs that illustrate the solving process of each problem. Compared with another publicly available dataset, GEOS, GeoQA is 25 times larger, and its program annotations provide a practical testbed for future research on explicit and interpretable numerical reasoning. In addition, we introduce a Neural Geometric Solver (NGS) that addresses geometric problems by comprehensively parsing multimodal information and generating interpretable programs. We further add multiple self-supervised auxiliary tasks to NGS to enhance cross-modal semantic representations. Extensive experiments on GeoQA validate the effectiveness of the proposed NGS and auxiliary tasks. Nevertheless, the results are still significantly below human performance, which leaves large room for future research. Our benchmark and code are released at https://github.com/chen-judge/geoqa.
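To make "annotated program" concrete, the toy executor below runs a short sequence of operator steps over problem constants and intermediate results, which is the general shape such interpretable programs take. The operator names and reference scheme here are invented for illustration and are not GeoQA's actual annotation vocabulary.

```python
import math

# A toy executor for sequential geometry programs: each step applies an
# operator to problem constants (N0, N1, ...) or earlier results (V0, V1, ...),
# and the last result is the answer.
OPS = {
    "add": lambda a, b: a + b,
    "minus": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
    "circle_area": lambda r: math.pi * r * r,
}

def execute(program, constants):
    """Run a list of (op, arg_refs) steps and return the final value."""
    values = []
    for op, args in program:
        resolved = [constants[int(a[1:])] if a[0] == "N" else values[int(a[1:])]
                    for a in args]
        values.append(OPS[op](*resolved))
    return values[-1]

# "A circle has radius 3; a square of side 2 is removed. What area remains?"
prog = [("circle_area", ["N0"]), ("multiply", ["N1", "N1"]), ("minus", ["V0", "V1"])]
print(round(execute(prog, [3.0, 2.0]), 3))   # 24.274
```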
In this paper, we propose a survival analysis model based on neural networks, together with a scalable optimization algorithm. A key technical challenge in directly applying maximum likelihood estimation (MLE) to censored data is that evaluating the objective function and its gradients with respect to the model parameters requires computing integrals. To address this challenge, we recognize that the MLE for censored data can be viewed as a differential-equation constrained optimization problem, a novel perspective. Following this connection, we model the distribution of event times through an ordinary differential equation and leverage efficient ODE solvers with adjoint sensitivity analysis to numerically evaluate the likelihood and its gradients. With this approach, we are able to 1) provide a broad family of continuous-time survival distributions without strong structural assumptions, 2) obtain powerful feature representations using neural networks, and 3) allow efficient estimation of the model in large-scale applications using stochastic gradient descent. Through simulation studies and real-world data examples, we demonstrate the effectiveness of the proposed method compared to existing state-of-the-art deep learning survival analysis models. The proposed SODEN method has been made publicly available at https://github.com/Jiaqima/soden.
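The censored-data log-likelihood underlying this setup is standard: for an observed event at time t it is log λ(t|x) − Λ(t|x), and for a censored subject it is −Λ(t|x), where the cumulative hazard Λ is the integral of a neural hazard λ. The sketch below evaluates this with a crude fixed-grid Riemann integration rather than the paper's adaptive ODE solvers and adjoint sensitivity analysis; the network architecture and names are assumptions.

```python
import torch
import torch.nn as nn

class HazardNet(nn.Module):
    """Neural hazard rate lambda(t | x) > 0; integrating it in t gives the
    cumulative hazard, so dLambda/dt = lambda is the ODE being solved."""
    def __init__(self, x_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=-1)).squeeze(-1)

def censored_log_likelihood(model, x, time, event, steps=64):
    """log-lik = event * log(lambda(t|x)) - Lambda(t|x), with Lambda estimated
    by averaging the hazard over a fixed grid on [0, t]."""
    grid = torch.linspace(0.0, 1.0, steps).view(1, steps, 1) * time.view(-1, 1, 1)
    xb = x.unsqueeze(1).expand(-1, steps, -1)
    lam = model(grid, xb)                        # (B, steps) hazard on the grid
    cum_hazard = lam.mean(dim=1) * time          # Riemann estimate of Lambda(t)
    lam_at_t = model(time.view(-1, 1), x)        # hazard at the observed time
    return event * torch.log(lam_at_t + 1e-8) - cum_hazard

# Toy usage on 4 subjects with 5 covariates each (1 = event, 0 = censored).
model = HazardNet(x_dim=5)
x = torch.randn(4, 5)
time = torch.rand(4) + 0.1
event = torch.tensor([1.0, 0.0, 1.0, 1.0])
loss = -censored_log_likelihood(model, x, time, event).mean()
loss.backward()
print(float(loss))
```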
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs (MER) automatically is therefore becoming increasingly crucial in the field of affective computing, providing essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite the recent efforts of several spontaneous ME datasets to alleviate this problem, the available data remain scarce. To address this data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
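The abstract does not specify which class-imbalance remedies were explored; one common option, shown below as a hedged sketch, is inverse-frequency resampling with PyTorch's WeightedRandomSampler so that rare expression classes appear more often in each mini-batch. The labels and feature dimensions are toy values.

```python
import torch
from collections import Counter
from torch.utils.data import WeightedRandomSampler, DataLoader, TensorDataset

# Toy label set mimicking an imbalanced micro-expression dataset:
# many samples of class 0, few of class 2.
labels = torch.tensor([0] * 300 + [1] * 80 + [2] * 20)
features = torch.randn(len(labels), 16)
dataset = TensorDataset(features, labels)

# Inverse-frequency weights: rare classes are drawn more often,
# so each mini-batch is roughly class-balanced.
counts = Counter(labels.tolist())
weights = torch.tensor([1.0 / counts[int(y)] for y in labels])
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

loader = DataLoader(dataset, batch_size=32, sampler=sampler)
batch_x, batch_y = next(iter(loader))
print(torch.bincount(batch_y, minlength=3))   # roughly even class counts
```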
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve the state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
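A minimal sketch of the underlying mechanism: encode one text prompt per organ/tumor with CLIP's text encoder and let each class embedding condition a mask head over voxel features. The prompt template and dot-product head below are assumptions for illustration, not the paper's exact design; the CLIP calls use the standard OpenAI `clip` package.

```python
import torch
import torch.nn as nn
import clip   # OpenAI CLIP package: https://github.com/openai/CLIP

# Encode one prompt per target structure with CLIP's text encoder; the
# resulting embeddings condition a per-class segmentation head.
classes = ["liver", "pancreas", "liver tumor"]
device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
with torch.no_grad():
    tokens = clip.tokenize([f"a computerized tomography of a {c}" for c in classes])
    text_emb = clip_model.encode_text(tokens.to(device)).float()   # (3, 512)

class TextConditionedHead(nn.Module):
    """Predict one binary mask per class by combining voxel features with
    the class's CLIP embedding (a simple dot-product head as a sketch)."""
    def __init__(self, feat_dim=64, text_dim=512):
        super().__init__()
        self.project = nn.Linear(text_dim, feat_dim)

    def forward(self, voxel_feats, text_emb):
        # voxel_feats: (B, feat_dim, D, H, W); text_emb: (C, text_dim)
        w = self.project(text_emb)                                   # (C, feat_dim)
        return torch.einsum("bfdhw,cf->bcdhw", voxel_feats, w)       # per-class logits

head = TextConditionedHead().to(device)
logits = head(torch.randn(1, 64, 8, 32, 32, device=device), text_emb)
print(logits.shape)   # torch.Size([1, 3, 8, 32, 32])
```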
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving the holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save the computational cost, the Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from the videos, and illustrate that MSTAT can achieve state-of-the-art accuracies on various standard benchmarks.
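The sketch below shows the general idea of attending along the spatial and temporal dimensions separately, which is why such factorized attention is cheaper than joint attention over all T*N tokens. It is a generic factorized attention block under assumed shapes, not the actual STA module.

```python
import torch
import torch.nn as nn

class FactorizedSpatioTemporalAttention(nn.Module):
    """Run self-attention over the spatial patches of each frame, then over
    the temporal positions of each patch, instead of attending over all
    T*N video tokens jointly."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, T, N, D) tokens for T frames with N patches each
        B, T, N, D = x.shape
        s = x.reshape(B * T, N, D)                       # attend within each frame
        s, _ = self.spatial_attn(s, s, s)
        s = s.reshape(B, T, N, D)
        t = s.permute(0, 2, 1, 3).reshape(B * N, T, D)   # attend across frames per patch
        t, _ = self.temporal_attn(t, t, t)
        return t.reshape(B, N, T, D).permute(0, 2, 1, 3)

# Toy usage: a clip of 8 frames, each split into 49 patch tokens.
sta = FactorizedSpatioTemporalAttention()
tokens = torch.randn(2, 8, 49, 256)
print(sta(tokens).shape)   # torch.Size([2, 8, 49, 256])
```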
Neural models with an encoder-decoder framework provide a feasible solution to Question Generation (QG). However, after analyzing the model vocabulary we find that current models (both RNN-based and pre-training based) have more than 23% inflected forms. As a result, the encoder will generate separate embeddings for the inflected forms, leading to a waste of training data and parameters. Even worse, in decoding these models are vulnerable to irrelevant noise and they suffer from high computational costs. In this paper, we propose an approach to enhance the performance of QG by fusing word transformation. Firstly, we identify the inflected forms of words from the input of the encoder and replace them with the root words, letting the encoder pay more attention to the repetitive root words. Secondly, we propose to adapt QG as a combination of the following actions in the encoder-decoder framework: generating a question word, copying a word from the source sequence, or generating a word transformation type. Such an extension can greatly decrease the size of predicted words in the decoder as well as noise. We apply our approach to a typical RNN-based model and UniLM to get the improved versions. We conduct extensive experiments on SQuAD and MS MARCO datasets. The experimental results show that the improved versions can significantly outperform the corresponding baselines in terms of BLEU, ROUGE-L and METEOR as well as time cost.
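The first step, replacing inflected forms with root words, could be implemented with an off-the-shelf lemmatizer; the sketch below uses spaCy (assuming the en_core_web_sm model is installed) and also records which surface forms were replaced, the kind of information a "word transformation type" action could later restore. The paper's own identification procedure may differ.

```python
import spacy

# Collapse inflected forms to their root (lemma) before encoding, so the
# encoder sees one embedding per root word instead of separate embeddings
# for "play", "plays", "played", "playing".
nlp = spacy.load("en_core_web_sm")

def to_root_forms(sentence):
    doc = nlp(sentence)
    roots = [tok.lemma_ for tok in doc]
    # Record which surface forms were replaced (surface, lemma) pairs.
    transforms = [(tok.text, tok.lemma_) for tok in doc
                  if tok.text.lower() != tok.lemma_]
    return " ".join(roots), transforms

text = "The children were playing games while their parents watched."
roots, transforms = to_root_forms(text)
print(roots)
print(transforms)   # e.g. [('children', 'child'), ('were', 'be'), ('playing', 'play'), ...]
```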